    Complexity-Aware Scheduling for an LDPC Encoded C-RAN Uplink

    Centralized Radio Access Network (C-RAN) is a new paradigm for wireless networks that centralizes the signal processing in a computing cloud, allowing commodity computational resources to be pooled. While C-RAN improves utilization and efficiency, the computational load occasionally exceeds the available resources, creating a computational outage. This paper provides a mathematical characterization of the computational outage probability for low-density parity check (LDPC) codes, a common class of error-correcting codes. For tractability, a binary erasure channel is assumed. Using the concept of density evolution, the computational demand is determined for a given ensemble of codes as a function of the erasure probability. The analysis reveals a trade-off: aggressively signaling at a high rate stresses the computing pool, while conservatively backing off the rate can avoid computational outages. Motivated by this trade-off, an effective computationally aware scheduling algorithm is developed that balances demands for high throughput and low outage rates. Comment: Conference on Information Sciences and Systems (CISS) 2017, to appear
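
    On the binary erasure channel, density evolution has a simple closed-form recursion, and the number of iterations needed to drive the erasure probability to (near) zero is a natural proxy for decoding complexity. The following is a minimal sketch of that idea, assuming a regular (3,6) ensemble and an illustrative tolerance and iteration cap that are not taken from the paper:

    ```python
    # Hypothetical sketch: estimating per-codeword decoding effort on a BEC via
    # density evolution. The regular (3,6) ensemble, the tolerance, and the
    # iteration cap are illustrative assumptions, not values from the paper.

    def density_evolution_iterations(eps, lam, rho, tol=1e-6, max_iter=1000):
        """Return the number of iterations until the message erasure probability
        falls below `tol`, or None if the decoder does not converge.

        lam(x): edge-perspective variable-node degree polynomial lambda(x)
        rho(x): edge-perspective check-node degree polynomial rho(x)
        """
        x = eps  # erasure probability of variable-to-check messages at iteration 0
        for it in range(1, max_iter + 1):
            x = eps * lam(1.0 - rho(1.0 - x))  # standard BEC density-evolution update
            if x < tol:
                return it
        return None  # demand exceeds the iteration budget: a computational outage

    # Regular (3,6) ensemble: lambda(x) = x^2, rho(x) = x^5 (BEC threshold ~0.4294)
    lam = lambda x: x ** 2
    rho = lambda x: x ** 5

    for eps in (0.30, 0.40, 0.42, 0.43):
        print(f"erasure prob {eps:.2f}: iterations = "
              f"{density_evolution_iterations(eps, lam, rho)}")
    ```

    As the erasure probability approaches the ensemble's threshold, the iteration count grows rapidly, which is the source of the rate-versus-complexity trade-off the abstract describes.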

    Complexity aware C-RAN scheduling for LDPC codes over BEC

    Effective transmission of data over a noisy wireless channel is a vital part of today's high-speed, technology-driven society. In a wireless cellular network, information is sent from mobile users to base stations. The information being transmitted is protected by error-control codes. In a conventional architecture the signal processing, including error-control decoding, is performed locally at each base station. Recently, a new architecture has emerged called Centralized Radio Access Network (C-RAN), which involves the centralized processing of the signals in a computing cloud. Using a computing cloud allows computational resources to be pooled, which improves utilization and efficiency. When the computational resources are finite and the computational load varies over time, there is a chance that the load will exceed the available resources. This situation creates a so-called computational outage, which has characteristics similar to outages caused by channel fading or interference. In this report, the computational complexity is quantified for a common class of error-correcting codes known as low-density parity check (LDPC) codes. To make the analysis tractable, a binary erasure channel is assumed. The concept of density evolution is used to obtain the complexity as a function of the code design parameters and the signal-to-interference-plus-noise ratio (SINR) of the channel. The analysis shows that there is a trade-off in that aggressively signaling at a high data rate causes high computational demands, while conservatively backing off on the rate can dramatically reduce the computational demand. Motivated by this trade-off, a scheduling algorithm is developed that balances the demands for high throughput and low computational outage rates.
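
    To make the throughput-versus-outage trade-off concrete, here is a minimal, hypothetical sketch of a complexity-aware scheduling rule: each user is greedily assigned the highest rate from an assumed rate ladder whose predicted decoding effort still fits within the computing pool's budget. The demand model, the candidate rates, and the budget below are illustrative assumptions, not the report's actual algorithm.

    ```python
    # Hypothetical sketch of complexity-aware scheduling: back a user's rate off
    # until the total predicted decoding effort fits the cloud's compute budget.
    # All numbers below are illustrative assumptions.

    def schedule(users, budget, demand):
        """users: per-user channel states (here, erasure probabilities)
        budget: total decoding effort the computing pool can absorb
        demand(user, rate): predicted effort for this user at this rate,
                            or None if decoding would not converge
        Returns a list of per-user rates (None = user not scheduled)."""
        candidate_rates = [0.5, 0.4, 0.3, 0.25]   # assumed rate ladder, highest first
        rates, load = [None] * len(users), 0.0
        for i, u in enumerate(users):
            for r in candidate_rates:             # greedily try the highest rate...
                d = demand(u, r)
                if d is not None and load + d <= budget:
                    rates[i] = r                  # ...that still fits the budget,
                    load += d
                    break                         # otherwise back off to a lower rate
        return rates

    # Toy demand model: effort grows as the rate approaches the BEC capacity 1 - eps.
    def demand(eps, rate):
        gap = (1.0 - eps) - rate
        return None if gap <= 0 else 1.0 / gap

    users = [0.30, 0.45, 0.55]                    # per-user erasure probabilities
    print(schedule(users, budget=30.0, demand=demand))
    ```

    A fuller scheduler would also weigh per-user throughput and fairness when choosing which users to back off, but the greedy rule above is enough to show how a compute budget forces some users below their channel-limited rate.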
